Santa Clara University
Race and Gender in LLM-Generated Personas: A Large-Scale Audit of 41 Occupations
van der Linden, Ilona, Kumar, Sahana, Dixit, Arnav, Sudan, Aadi, Danda, Smruthi, Anastasiu, David C., Lukoff, Kai
Generative AI tools are increasingly used to create portrayals of people in occupations, raising concerns about how race and gender are represented. We conducted a large-scale audit of over 1.5 million occupational personas across 41 U.S. occupations, generated by four large language models with different AI safety commitments and countries of origin (U.S., China, France). Compared with Bureau of Labor Statistics data, we find two recurring patterns: systematic shifts, where some groups are consistently under- or overrepresented, and stereotype exaggeration, where existing demographic skews are amplified. On average, White (-31pp) and Black (-9pp) workers are underrepresented, while Hispanic (+17pp) and Asian (+12pp) workers are overrepresented. These distortions can be extreme: for example, across all four models, Housekeepers are portrayed as nearly 100% Hispanic, while Black workers are erased from many occupations. For HCI, these findings show provider choice materially changes who is visible, motivating model-specific audits and accountable design practices.
- Europe > France (0.24)
- Asia > China (0.24)
- Europe > Denmark > Capital Region > Copenhagen (0.14)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Generation (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.66)
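The audit above compares the demographic shares of model-generated personas against Bureau of Labor Statistics baselines, expressed as percentage-point (pp) gaps. A minimal sketch of that comparison is below; the per-occupation shares are illustrative values chosen only to reproduce the average gaps quoted in the abstract, not the paper's actual data.

```python
# Hypothetical sketch: comparing model-generated persona demographics
# against a BLS baseline in percentage points (pp).
# All shares below are illustrative, not the paper's actual data.

def representation_gap(model_share: float, bls_share: float) -> float:
    """Gap in percentage points; positive means overrepresented by the model."""
    return round((model_share - bls_share) * 100, 1)

# Illustrative fractions of generated personas vs. BLS workforce shares.
model_shares = {"White": 0.30, "Black": 0.05, "Hispanic": 0.34, "Asian": 0.20}
bls_shares = {"White": 0.61, "Black": 0.14, "Hispanic": 0.17, "Asian": 0.08}

gaps = {g: representation_gap(model_shares[g], bls_shares[g]) for g in model_shares}
print(gaps)  # {'White': -31.0, 'Black': -9.0, 'Hispanic': 17.0, 'Asian': 12.0}
```

A negative gap marks a group the models render less often than it appears in the real workforce; the paper's "stereotype exaggeration" pattern corresponds to gaps that grow in the same direction as an existing skew.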
Can AI Read Between The Lines? Benchmarking LLMs On Financial Nuance
Kubica, Dominick, Gordon, Dylan T., Emura, Nanami, Saini, Derleen, Goldenberg, Charlie
As of 2025, Generative Artificial Intelligence (GenAI) has become a central tool for productivity across industries. Beyond text generation, GenAI now plays a critical role in coding, data analysis, and research workflows. As large language models (LLMs) continue to evolve, it is essential to assess the reliability and accuracy of their outputs, especially in specialized, high-stakes domains like finance. Most modern LLMs transform text into numerical vectors, which are used in operations such as cosine similarity searches to generate responses. However, this abstraction process can lead to misinterpretation of emotional tone, particularly in nuanced financial contexts. While LLMs generally excel at identifying sentiment in everyday language, these models often struggle with the nuanced, strategically ambiguous language found in earnings call transcripts. Financial disclosures frequently embed sentiment in hedged statements, forward-looking language, and industry-specific jargon, making it difficult even for human analysts to interpret consistently, let alone AI models. This paper presents findings from the Santa Clara Microsoft Practicum Project, led by Professor Charlie Goldenberg, which benchmarks the performance of Microsoft's Copilot, OpenAI's ChatGPT, Google's Gemini, and traditional machine learning models for sentiment analysis of financial text. Using Microsoft earnings call transcripts, the analysis assesses how well LLM-derived sentiment correlates with market sentiment and stock movements and evaluates the accuracy of model outputs. Prompt engineering techniques are also examined to improve sentiment analysis results. Visualizations of sentiment consistency are developed to evaluate alignment between tone and stock performance, with sentiment trends analyzed across Microsoft's lines of business to determine which segments exert the greatest influence.
- Research Report (0.82)
- Financial News (0.72)
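The abstract above notes that LLM pipelines transform text into numerical vectors and compare them with operations such as cosine similarity. A minimal self-contained sketch of that step follows; the vectors are toy values standing in for real model embeddings, and the variable names are illustrative.

```python
# Minimal sketch of the embedding-comparison step described above:
# texts become vectors, and cosine similarity between vectors drives
# which passages are retrieved. Toy vectors, not real embeddings.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.3]      # embedding of an analyst's question
hedged = [0.8, 0.2, 0.4]     # embedding of a hedged earnings statement
unrelated = [0.1, 0.9, 0.0]  # embedding of an off-topic sentence

# The hedged statement is closer to the query than the off-topic one.
assert cosine_similarity(query, hedged) > cosine_similarity(query, unrelated)
```

The abstraction the paper worries about is visible here: two sentences with very different emotional tone can still land close together in vector space, which is one way hedged financial language gets misread.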
Artificial microswimmers can navigate similarly to natural microorganisms, thanks to AI - Dataconomy
Artificial microswimmers that move similarly to naturally occurring swimming microorganisms have recently been the focus of some researchers. Microorganisms are all around us and are closely connected to how people live their daily lives. Microorganisms have piqued the interest of scientists ever since their discovery in the 19th century. They were cultivated for research purposes, but this process is costly and time-consuming, and the culture approach cannot keep pace with high-throughput sequencing technology.
- North America > United States > New Jersey (0.07)
- Asia > China > Hong Kong (0.07)
- North America > United States > Pennsylvania (0.05)
- North America > United States > Illinois (0.05)
AI Helps Microrobots Learn to Swim and Navigate
A team of researchers from Santa Clara University, New Jersey Institute of Technology, and the University of Hong Kong have successfully used deep reinforcement learning to teach microrobots how to swim. The new development is a major step forward in microswimming capabilities. Experts have been consistently focused on creating artificial microswimmers that can navigate similarly to naturally occurring swimming microorganisms, such as bacteria. These microswimmers could be used for a variety of biomedical applications in the future, such as targeted drug delivery and microsurgery. Even with the focus on development, most of today's artificial microswimmers can only perform simple maneuvers with fixed locomotory gaits.
- North America > United States > New Jersey (0.28)
- Asia > China > Hong Kong (0.28)
- North America > United States > Pennsylvania (0.06)
Smart microrobots learn how to swim and navigate with artificial intelligence
Researchers from Santa Clara University, New Jersey Institute of Technology and the University of Hong Kong have been able to successfully teach microrobots how to swim via deep reinforcement learning, marking a substantial leap in the progression of microswimming capability. There has been tremendous interest in developing artificial microswimmers that can navigate the world similarly to naturally occurring swimming microorganisms, like bacteria. Such microswimmers provide promise for a vast array of future biomedical applications, such as targeted drug delivery and microsurgery. Yet, most artificial microswimmers to date can only perform relatively simple maneuvers with fixed locomotory gaits. In the researchers' study published in Communications Physics, they reasoned microswimmers could learn, and adapt to changing conditions, through AI.
- North America > United States > New Jersey (0.28)
- Asia > China > Hong Kong (0.28)
- North America > United States > Pennsylvania (0.06)
VIT-AP hosts IEEE meet on AI, Signal Processing
Vijayawada: The Second International Conference on Artificial Intelligence and Signal Processing, organised by the School of Electronics Engineering, VIT-AP University, was inaugurated on Saturday by chief guest Srinivas Lingam, vice-president, Data Center and AI Group, Intel Corporation. The conference is organised in coordination with the IEEE Guntur Subsection and technically co-sponsored by the IEEE Hyderabad Section, with guest of honour Prof Tokunbo Ogunfunmi of Santa Clara University, USA, attending virtually. In his inaugural address, Srinivas Lingam said the focus of this forum is to present the latest developments in artificial intelligence and signal processing and to bring together researchers and practitioners from both academia and industry. Prof Tokunbo Ogunfunmi said the conference is also intended to publish visionary papers with unique and novel contributions so that innovations in this area are further accelerated. Dr S V Kota Reddy, vice-chancellor, VIT-AP, said the second international conference will conclude on February 14.
- North America > United States (0.53)
- Asia > India (0.08)
Artificial Intelligence and the Human Person
A seminar on Artificial Intelligence (AI) was held at Santa Clara University (Silicon Valley, California) from April 3-5, 2019, sponsored by the China Forum for Civilizational Dialogue (an institution born from the joint commitment of La Civiltà Cattolica and Georgetown University) and the Pontifical Council for Culture. The event was hosted by the Tech & the Human Spirit Initiative at Santa Clara. The meeting brought together, in addition to the two authors of these reflections, another 11 participants, scholars from China, the United States and Europe, to examine how the great changes underway are posing challenges to the Christian and Confucian traditions, as well as to other religious and secular traditions.[1] The enormous progress made in the last 10 years in the field of AI marks a historical discontinuity. China and the West have just begun to address the implications. In the long term, the AI revolution could redefine several fundamental philosophical questions: If machines surpass humans in intelligence, what will become of human uniqueness, dignity and freedom? Will computers become "aware" and "creative"?
- Asia > China (0.66)
- North America > United States > California (0.54)
- Europe (0.24)
- Asia > Singapore (0.04)
- Education (0.88)
- Information Technology (0.66)
Medical Scheduling Software Makes Black Patients Wait Longer In Waiting Rooms Than White Patients
Artificial intelligence is mostly a blessing. But sometimes it can be a curse. New research currently under review at the journal Management Science explores a new instance of an algorithm gone awry. Michele Samorani, an assistant professor at Santa Clara University and lead author of the study, explains exactly how a well-intentioned software program can make racially-biased scheduling decisions. "Millions of medical appointments are scheduled in the United States every year, and a large fraction of them result in no-shows," states Samorani.
On Ethics and Machine Learning
Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Over in Santa Clara University's Leavey School of Business, professor Sanjiv Das teaches machine learning to graduate students enrolled in the MS of Information Systems program. As the Spring 2017 quarter was about to start, Subramaniam (Subbu) Vincent (the Tech Lead for the center's Trust Project, and an engineer-journalist with experience in data science) suggested that the two of them might collaborate in an effort to introduce the students to some key questions in data analytics: what do fairness and bias look like in the context of machine learning? And, if bias is detected in a dataset or an algorithm, are there ways to minimize or correct for it? In his hands-on, skill-building course, professor Das asked the students to work in small groups as they practiced predictive modeling on data sets--and proposed the fairness questions as one project option. Five of the groups took him up on the offer.
Social Robots, AI, and Ethics - Markkula Center for Applied Ethics - Santa Clara University
Currently the world is rapidly developing robotic and artificial intelligence (AI) technologies. These technologies offer enormous potential benefits, yet there are also drawbacks and dangers. Using the Ethics Center's Framework for Ethical Decision Making, we can consider some of the ethical issues involved with Robots and AI. Utilitarianism is a form of moral reasoning which emphasizes the consequences of actions. Typically it tries to maximize happiness and minimize suffering, though there are other ways to use utilitarian evaluation such as cost-benefit analysis.